Probabilistic Safety for Bayesian Neural Networks

Wicker, Matthew, Laurenti, Luca, Patane, Andrea, Kwiatkowska, Marta

arXiv.org Machine Learning

We study probabilistic safety for Bayesian Neural Networks (BNNs) under adversarial input perturbations. Given a compact set of input points, $T \subseteq \mathbb{R}^m$, we study the probability w.r.t. the BNN posterior that all the points in $T$ are mapped to the same region $S$ in the output space. In particular, this can be used to evaluate the probability that a network sampled from the BNN is vulnerable to adversarial attacks. We rely on relaxation techniques from non-convex optimization to develop a method for computing a lower bound on probabilistic safety for BNNs, deriving explicit procedures for the case of interval and linear function propagation techniques. We apply our methods to BNNs trained on a regression task, airborne collision avoidance, and MNIST, empirically showing that our approach allows one to certify probabilistic safety of BNNs with millions of parameters.
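
The bound-propagation step the abstract refers to can be illustrated with interval bound propagation (IBP). Below is a minimal Python sketch, not the paper's certified procedure: it propagates the input box $T$ through a fully connected ReLU network and, given a set of posterior weight samples, reports the fraction of sampled networks that provably map $T$ into a safe output box $S$. The function names (`ibp_forward`, `mc_probabilistic_safety`) and the list-of-(weights, biases) representation of posterior samples are illustrative assumptions; the paper instead derives a sound lower bound on the posterior probability itself.

```python
import numpy as np

def ibp_forward(lower, upper, weights, biases):
    """Propagate the input box [lower, upper] through a fully connected
    ReLU network using interval bound propagation (IBP)."""
    for i, (W, b) in enumerate(zip(weights, biases)):
        # Split W by sign so each affine bound pairs the right interval end.
        W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
        new_lower = W_pos @ lower + W_neg @ upper + b
        new_upper = W_pos @ upper + W_neg @ lower + b
        if i < len(weights) - 1:
            # ReLU is monotone, so it can be applied to the bounds directly.
            new_lower, new_upper = np.maximum(new_lower, 0.0), np.maximum(new_upper, 0.0)
        lower, upper = new_lower, new_upper
    return lower, upper

def mc_probabilistic_safety(posterior_samples, x_lower, x_upper, s_lower, s_upper):
    """Monte Carlo estimate of probabilistic safety: the fraction of
    posterior weight samples whose network provably maps the input box
    T = [x_lower, x_upper] into the output box S = [s_lower, s_upper].
    IBP is conservative, so this fraction is itself an underestimate
    of the per-sample safety indicator."""
    safe = 0
    for weights, biases in posterior_samples:
        out_lo, out_hi = ibp_forward(x_lower, x_upper, weights, biases)
        if np.all(out_lo >= s_lower) and np.all(out_hi <= s_upper):
            safe += 1
    return safe / len(posterior_samples)
```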


Uncertainty Quantification with Statistical Guarantees in End-to-End Autonomous Driving Control

Michelmore, Rhiannon, Wicker, Matthew, Laurenti, Luca, Cardelli, Luca, Gal, Yarin, Kwiatkowska, Marta

arXiv.org Machine Learning

Deep neural network controllers for autonomous driving have recently benefited from significant performance improvements and have begun deployment in the real world. Prior to their widespread adoption, safety guarantees are needed on the controller behaviour that properly take account of the uncertainty within the model as well as sensor noise. Bayesian neural networks, which assume a prior over the weights, have been shown capable of producing such uncertainty measures, but properties surrounding their safety have not yet been quantified for use in autonomous driving scenarios. In this paper, we develop a framework based on a state-of-the-art simulator for evaluating end-to-end Bayesian controllers. In addition to pointwise uncertainty measures that can be computed in real time and with statistical guarantees, we also provide a method for estimating the probability that, given a scenario, the controller keeps the car safe within a finite horizon. We experimentally evaluate the quality of uncertainty computation by three Bayesian inference methods in different scenarios and show how the uncertainty measures can be combined and calibrated for use in collision avoidance. Our results suggest that uncertainty estimates can greatly aid decision making in autonomous driving.

From the introduction: Deep Neural Networks (DNNs) have seen a surge in popularity over the past decade, and their use has become widespread in many fields, including safety-critical systems such as medical diagnosis and, in particular, autonomous cars. The latter have driven millions of miles without human intervention [1], [2], but offer few safety guarantees.
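
As a rough illustration of the two ingredients the abstract mentions, pointwise uncertainty and a finite-horizon safety probability with a statistical guarantee, here is a hedged Python sketch. The `run_episode` callable is a hypothetical stand-in for one simulator rollout with resampled BNN weights and sensor noise; the sample-size formula is the standard Chernoff/Hoeffding bound, which is the kind of guarantee the abstract invokes, not necessarily the paper's exact procedure.

```python
import numpy as np

def required_samples(epsilon, delta):
    """Hoeffding/Chernoff bound: number of i.i.d. rollouts needed so that
    the empirical safety probability is within epsilon of the true one
    with confidence at least 1 - delta."""
    return int(np.ceil(np.log(2.0 / delta) / (2.0 * epsilon ** 2)))

def estimate_safety_probability(run_episode, epsilon=0.05, delta=0.01):
    """Estimate P(controller keeps the car safe within a finite horizon)
    by Monte Carlo simulation. `run_episode` (hypothetical) executes one
    stochastic rollout, resampling BNN weights and sensor noise, and
    returns True if no collision occurred within the horizon."""
    n = required_samples(epsilon, delta)
    outcomes = [run_episode() for _ in range(n)]
    return sum(outcomes) / n

def predictive_uncertainty(control_samples):
    """Pointwise uncertainty of a Bayesian controller at a single input:
    mean and variance of the control output across posterior samples
    (e.g., stochastic forward passes under MC dropout)."""
    samples = np.asarray(control_samples)
    return samples.mean(axis=0), samples.var(axis=0)
```

For example, `epsilon=0.05` and `delta=0.01` give `required_samples` of 1060 rollouts, after which the empirical safety estimate deviates from the true probability by more than 0.05 with probability at most 0.01.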